4D Seismic History Matching Incorporating Unsupervised Learning
The work discussed and presented in this paper focuses on the history
matching of reservoirs by integrating 4D seismic data into the inversion
process using machine learning techniques. A new integrated scheme for the
reconstruction of petrophysical properties with a modified Ensemble Smoother
with Multiple Data Assimilation (ES-MDA) in a synthetic reservoir is proposed.
The permeability field inside the reservoir is parametrised with an
unsupervised learning approach, namely K-means with Singular Value
Decomposition (K-SVD). This is combined with the Orthogonal Matching Pursuit
(OMP) technique, which is commonly used in sparsity-promoting regularisation
schemes. Moreover, seismic attributes, in particular acoustic impedance, are
parametrised with the Discrete Cosine Transform (DCT). This novel combination
of techniques from machine learning, sparsity regularisation, seismic imaging
and history matching aims to address the ill-posedness of the inversion of
historical production data efficiently using ES-MDA. In the numerical
experiments provided, I demonstrate that these sparse representations of the
petrophysical properties and the seismic attributes enable better matches to
the true production data and a more accurate quantification of the propagating
waterfront than more traditional methods that do not use comparable
parametrisation techniques.
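The sparse-parametrisation idea behind the K-SVD/OMP and DCT components can be illustrated in isolation. The sketch below is a hypothetical toy, not the paper's reservoir setup: it builds an orthonormal DCT dictionary in NumPy and recovers a 3-sparse coefficient vector with a minimal OMP implementation.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D to fit y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.abs(D.T @ residual).argmax()))
        # Refit coefficients on all selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Orthonormal DCT-II dictionary: column k samples cos(pi * (n + 0.5) * k / N).
N = 64
n = np.arange(N)
D = np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
D /= np.linalg.norm(D, axis=0)

# A signal that is exactly 3-sparse in this dictionary.
truth = np.zeros(N)
truth[[5, 20, 40]] = [1.0, -2.0, 0.5]
y = D @ truth

x_hat = omp(D, y, 3)
print(np.sort(np.nonzero(x_hat)[0]))  # indices of the recovered active atoms
```

In the paper's setting the dictionary is learned from training fields by K-SVD rather than fixed; the DCT dictionary here simply makes the recovery exact and easy to verify.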
Ultra-fast Deep Mixtures of Gaussian Process Experts
Mixtures of experts have become an indispensable tool for flexible modelling
in a supervised learning context, and sparse Gaussian processes (GP) have shown
promise as a leading candidate for the experts in such models. In the present
article, we propose to design the gating network for selecting the experts from
such mixtures of sparse GPs using a deep neural network (DNN). This combination
provides a flexible, robust, and efficient model which is able to significantly
outperform competing models. We furthermore consider efficient approaches to
computing maximum a posteriori (MAP) estimators of these models by iteratively
maximizing the distribution of experts given allocations and allocations given
experts. We also show that a recently introduced method called
Cluster-Classify-Regress (CCR) is capable of providing a good approximation of
the optimal solution extremely quickly. This approximation can then be further
refined with the iterative algorithm.
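The MAP alternation described above, fitting the experts given the allocations and then reallocating points given the experts, can be sketched with ordinary least-squares "experts" in place of sparse GPs and a residual-based hard assignment in place of the DNN gate; both substitutions are mine, for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two linear "experts": y = -3x on x < 0 and y = 3x on x >= 0.
x = rng.uniform(-1.0, 1.0, 300)
y = 3.0 * np.abs(x)

# Start from a noisy allocation: the true partition with 20% of labels flipped.
labels = (x >= 0.0).astype(int)
flip = rng.random(300) < 0.2
labels[flip] = 1 - labels[flip]

for _ in range(20):
    # Experts given allocations: least-squares line per group.
    fits = [np.polyfit(x[labels == k], y[labels == k], 1) for k in (0, 1)]
    # Allocations given experts: assign each point to its best-fitting expert.
    resid = np.stack([np.abs(y - np.polyval(f, x)) for f in fits])
    labels = resid.argmin(axis=0)

preds = np.where(labels == 0, np.polyval(fits[0], x), np.polyval(fits[1], x))
mse = float(np.mean((y - preds) ** 2))
print(f"in-sample MSE after alternation: {mse:.2e}")
```

Each sweep can only lower the in-sample error, which is what makes the alternation a coordinate-ascent scheme for the MAP objective; here it recovers both linear experts despite the corrupted initialisation.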
Cluster, Classify, Regress: A General Method For Learning Discontinuous Functions
This paper presents a method for solving the supervised learning problem in
which the output is highly nonlinear and discontinuous. It is proposed to solve
this problem in three stages: (i) cluster the pairs of input-output data
points, resulting in a label for each point; (ii) classify the data, where the
corresponding label is the output; and finally (iii) perform one separate
regression for each class, where the training data corresponds to the subset of
the original input-output pairs which have that label according to the
classifier. It has not previously been proposed to combine these three
fundamental building blocks of machine learning in this simple and powerful
fashion. This
can be viewed as a form of deep learning, where any of the intermediate layers
can itself be deep. The utility and robustness of the methodology are
illustrated on some toy problems, including one example problem arising from
simulation of plasma fusion in a tokamak.
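The three-stage recipe can be sketched end-to-end on a one-dimensional toy problem. The NumPy code below uses a simple 2-means on the outputs for stage (i), a nearest-class-mean classifier for stage (ii), and per-class linear least squares for stage (iii); these concrete components are my choices for illustration, since the paper allows any of the three stages to be deep.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discontinuous target: two linear pieces with a jump at x = 0.
x = rng.uniform(-1.0, 1.0, 200)
y = np.where(x < 0.0, 2.0 * x - 3.0, 2.0 * x + 3.0)

# (i) Cluster the input-output pairs -- here a 2-means on y alone,
#     since the jump separates the outputs cleanly.
centroids = np.array([y.min(), y.max()])
for _ in range(10):
    labels = np.abs(y[:, None] - centroids[None, :]).argmin(axis=1)
    centroids = np.array([y[labels == k].mean() for k in range(2)])

# (ii) Classify: nearest-class-mean on the input x, so that new points
#      can be assigned a label without knowing y.
x_means = np.array([x[labels == k].mean() for k in range(2)])
def classify(x_new):
    return np.abs(np.asarray(x_new)[:, None] - x_means[None, :]).argmin(axis=1)

# (iii) Regress separately within each class on that class's training pairs.
fits = [np.polyfit(x[labels == k], y[labels == k], 1) for k in range(2)]
def predict(x_new):
    x_new = np.asarray(x_new, dtype=float)
    cls = classify(x_new)
    return np.array([np.polyval(fits[k], xi) for k, xi in zip(cls, x_new)])

print(predict(np.array([-0.5, 0.5])))  # the jump across x = 0 is preserved
```

A single global regressor would smooth the discontinuity away; routing each input through its own class-specific regressor is what lets the composite model reproduce the jump exactly.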